Epoch 1: avg free-energy loss = -7.9494
Epoch 2: avg free-energy loss = -16.3580
Epoch 3: avg free-energy loss = -14.2304
Epoch 4: avg free-energy loss = -12.9966
Epoch 5: avg free-energy loss = -12.2340
Classifier Epoch 1: loss = 0.7704
Classifier Epoch 2: loss = 0.5172
Classifier Epoch 3: loss = 0.4715
Classifier Epoch 4: loss = 0.4467
Classifier Epoch 5: loss = 0.4292
Test Accuracy: 83.44%
Macro F1 Score: 0.8276
Epoch 1: avg free-energy loss = -20.6368
Epoch 2: avg free-energy loss = -20.9018
Epoch 3: avg free-energy loss = -18.6343
Epoch 4: avg free-energy loss = -17.2850
Epoch 5: avg free-energy loss = -16.3783
Classifier Epoch 1: loss = 0.8672
Classifier Epoch 2: loss = 0.5673
Classifier Epoch 3: loss = 0.5158
Classifier Epoch 4: loss = 0.4849
Classifier Epoch 5: loss = 0.4681
Test Accuracy: 81.93%
Macro F1 Score: 0.8173
Epoch 1: avg free-energy loss = 296.3109
Epoch 2: avg free-energy loss = 91.2839
Epoch 3: avg free-energy loss = 62.4585
Epoch 4: avg free-energy loss = 46.8922
Epoch 5: avg free-energy loss = 39.4544
Classifier Epoch 1: loss = 0.6366
Classifier Epoch 2: loss = 0.4564
Classifier Epoch 3: loss = 0.4169
Classifier Epoch 4: loss = 0.3942
Classifier Epoch 5: loss = 0.3887
Test Accuracy: 85.08%
Macro F1 Score: 0.8486
Epoch 1: avg free-energy loss = 54.0006
Epoch 2: avg free-energy loss = 23.1856
Epoch 3: avg free-energy loss = 18.4842
Epoch 4: avg free-energy loss = 17.9494
Epoch 5: avg free-energy loss = 15.1389
Classifier Epoch 1: loss = 0.5456
Classifier Epoch 2: loss = 0.4388
Classifier Epoch 3: loss = 0.3897
Classifier Epoch 4: loss = 0.3624
Classifier Epoch 5: loss = 0.3472
Test Accuracy: 84.31%
Macro F1 Score: 0.8381
Epoch 1: avg free-energy loss = 226.4909
Epoch 2: avg free-energy loss = 67.9030
Epoch 3: avg free-energy loss = 46.5347
Epoch 4: avg free-energy loss = 36.2489
Epoch 5: avg free-energy loss = 31.9510
Classifier Epoch 1: loss = 0.6282
Classifier Epoch 2: loss = 0.4524
Classifier Epoch 3: loss = 0.4191
Classifier Epoch 4: loss = 0.4028
Classifier Epoch 5: loss = 0.3858
Test Accuracy: 84.74%
Macro F1 Score: 0.8392
Epoch 1: avg free-energy loss = 164.7099
Epoch 2: avg free-energy loss = 55.8821
Epoch 3: avg free-energy loss = 38.4960
Epoch 4: avg free-energy loss = 28.1885
Epoch 5: avg free-energy loss = 29.2875
Classifier Epoch 1: loss = 0.5411
Classifier Epoch 2: loss = 0.4347
Classifier Epoch 3: loss = 0.4121
Classifier Epoch 4: loss = 0.3994
Classifier Epoch 5: loss = 0.3632
Test Accuracy: 83.91%
Macro F1 Score: 0.8380
Epoch 1: avg free-energy loss = 53.6455
Epoch 2: avg free-energy loss = 11.5056
Epoch 3: avg free-energy loss = 7.1479
Epoch 4: avg free-energy loss = 4.5464
Epoch 5: avg free-energy loss = 3.2979
Classifier Epoch 1: loss = 0.8040
Classifier Epoch 2: loss = 0.5026
Classifier Epoch 3: loss = 0.4601
Classifier Epoch 4: loss = 0.4359
Classifier Epoch 5: loss = 0.4193
Test Accuracy: 83.97%
Macro F1 Score: 0.8386
Epoch 1: avg free-energy loss = 3.3027
Epoch 2: avg free-energy loss = -0.9166
Epoch 3: avg free-energy loss = -0.8084
Epoch 4: avg free-energy loss = -0.7366
Epoch 5: avg free-energy loss = -0.6085
Classifier Epoch 1: loss = 0.5380
Classifier Epoch 2: loss = 0.4178
Classifier Epoch 3: loss = 0.3840
Classifier Epoch 4: loss = 0.3659
Classifier Epoch 5: loss = 0.3454
Test Accuracy: 86.44%
Macro F1 Score: 0.8637
Epoch 1: avg free-energy loss = 196.7385
Epoch 2: avg free-energy loss = 59.4501
Epoch 3: avg free-energy loss = 39.2503
Epoch 4: avg free-energy loss = 30.3215
Epoch 5: avg free-energy loss = 24.9154
Classifier Epoch 1: loss = 0.5843
Classifier Epoch 2: loss = 0.4252
Classifier Epoch 3: loss = 0.4012
Classifier Epoch 4: loss = 0.3675
Classifier Epoch 5: loss = 0.3511
Test Accuracy: 85.95%
Macro F1 Score: 0.8605
Epoch 1: avg free-energy loss = 41.2625
Epoch 2: avg free-energy loss = 11.5472
Epoch 3: avg free-energy loss = 8.1285
Epoch 4: avg free-energy loss = 6.4034
Epoch 5: avg free-energy loss = 5.3511
Classifier Epoch 1: loss = 0.5370
Classifier Epoch 2: loss = 0.3989
Classifier Epoch 3: loss = 0.3668
Classifier Epoch 4: loss = 0.3484
Classifier Epoch 5: loss = 0.3259
Test Accuracy: 86.43%
Macro F1 Score: 0.8634
Epoch 1: avg free-energy loss = 75.1814
Epoch 2: avg free-energy loss = 20.2457
Epoch 3: avg free-energy loss = 14.8070
Epoch 4: avg free-energy loss = 12.2811
Epoch 5: avg free-energy loss = 10.7887
Classifier Epoch 1: loss = 0.5874
Classifier Epoch 2: loss = 0.4288
Classifier Epoch 3: loss = 0.3944
Classifier Epoch 4: loss = 0.3689
Classifier Epoch 5: loss = 0.3533
Test Accuracy: 86.48%
Macro F1 Score: 0.8623
Epoch 1: avg free-energy loss = 84.7989
Epoch 2: avg free-energy loss = 22.2254
Epoch 3: avg free-energy loss = 15.9761
Epoch 4: avg free-energy loss = 13.1407
Epoch 5: avg free-energy loss = 11.4755
Classifier Epoch 1: loss = 0.5873
Classifier Epoch 2: loss = 0.4339
Classifier Epoch 3: loss = 0.3899
Classifier Epoch 4: loss = 0.3677
Classifier Epoch 5: loss = 0.3529
Test Accuracy: 86.77%
Macro F1 Score: 0.8651
Epoch 1: avg free-energy loss = 92.5530
Epoch 2: avg free-energy loss = 25.0804
Epoch 3: avg free-energy loss = 18.2475
Epoch 4: avg free-energy loss = 14.6714
Epoch 5: avg free-energy loss = 12.6846
Classifier Epoch 1: loss = 0.6079
Classifier Epoch 2: loss = 0.4299
Classifier Epoch 3: loss = 0.4033
Classifier Epoch 4: loss = 0.3714
Classifier Epoch 5: loss = 0.3539
Test Accuracy: 86.08%
Macro F1 Score: 0.8603
Epoch 1: avg free-energy loss = 65.8722
Epoch 2: avg free-energy loss = 18.0436
Epoch 3: avg free-energy loss = 13.3529
Epoch 4: avg free-energy loss = 10.9941
Epoch 5: avg free-energy loss = 9.6961
Classifier Epoch 1: loss = 0.6006
Classifier Epoch 2: loss = 0.4269
Classifier Epoch 3: loss = 0.3847
Classifier Epoch 4: loss = 0.3657
Classifier Epoch 5: loss = 0.3514
Test Accuracy: 85.65%
Macro F1 Score: 0.8531
Epoch 1: avg free-energy loss = 176.3672
Epoch 2: avg free-energy loss = 48.0087
Epoch 3: avg free-energy loss = 31.4469
Epoch 4: avg free-energy loss = 24.4371
Epoch 5: avg free-energy loss = 20.4467
Classifier Epoch 1: loss = 0.6424
Classifier Epoch 2: loss = 0.4450
Classifier Epoch 3: loss = 0.4096
Classifier Epoch 4: loss = 0.3909
Classifier Epoch 5: loss = 0.3690
Test Accuracy: 85.61%
Macro F1 Score: 0.8556
Epoch 1: avg free-energy loss = 68.3595
Epoch 2: avg free-energy loss = 18.6610
Epoch 3: avg free-energy loss = 13.3781
Epoch 4: avg free-energy loss = 11.2155
Epoch 5: avg free-energy loss = 9.7690
Classifier Epoch 1: loss = 0.6141
Classifier Epoch 2: loss = 0.4468
Classifier Epoch 3: loss = 0.4112
Classifier Epoch 4: loss = 0.3930
Classifier Epoch 5: loss = 0.3782
Test Accuracy: 85.45%
Macro F1 Score: 0.8489
Epoch 1: avg free-energy loss = 103.7954
Epoch 2: avg free-energy loss = 28.6273
Epoch 3: avg free-energy loss = 20.5303
Epoch 4: avg free-energy loss = 16.4765
Epoch 5: avg free-energy loss = 14.3530
Classifier Epoch 1: loss = 0.5685
Classifier Epoch 2: loss = 0.4234
Classifier Epoch 3: loss = 0.3888
Classifier Epoch 4: loss = 0.3640
Classifier Epoch 5: loss = 0.3481
Test Accuracy: 85.80%
Macro F1 Score: 0.8587
Epoch 1: avg free-energy loss = 96.9961
Epoch 2: avg free-energy loss = 25.8510
Epoch 3: avg free-energy loss = 18.9783
Epoch 4: avg free-energy loss = 16.2600
Epoch 5: avg free-energy loss = 14.3283
Classifier Epoch 1: loss = 0.6185
Classifier Epoch 2: loss = 0.4455
Classifier Epoch 3: loss = 0.4120
Classifier Epoch 4: loss = 0.3797
Classifier Epoch 5: loss = 0.3625
Test Accuracy: 84.43%
Macro F1 Score: 0.8430
Epoch 1: avg free-energy loss = 59.5948
Epoch 2: avg free-energy loss = 14.7077
Epoch 3: avg free-energy loss = 10.9849
Epoch 4: avg free-energy loss = 9.0216
Epoch 5: avg free-energy loss = 7.8285
Classifier Epoch 1: loss = 0.6529
Classifier Epoch 2: loss = 0.4630
Classifier Epoch 3: loss = 0.4274
Classifier Epoch 4: loss = 0.4015
Classifier Epoch 5: loss = 0.3811
Test Accuracy: 85.17%
Macro F1 Score: 0.8474
Epoch 1: avg free-energy loss = 143.3618
Epoch 2: avg free-energy loss = 39.3630
Epoch 3: avg free-energy loss = 25.9254
Epoch 4: avg free-energy loss = 20.3991
Epoch 5: avg free-energy loss = 16.9682
Classifier Epoch 1: loss = 0.6253
Classifier Epoch 2: loss = 0.4441
Classifier Epoch 3: loss = 0.4031
Classifier Epoch 4: loss = 0.3860
Classifier Epoch 5: loss = 0.3628
Test Accuracy: 85.11%
Macro F1 Score: 0.8536
Epoch 1: avg free-energy loss = 123.2998
Epoch 2: avg free-energy loss = 28.8367
Epoch 3: avg free-energy loss = 21.0347
Epoch 4: avg free-energy loss = 17.3812
Epoch 5: avg free-energy loss = 15.1545
Classifier Epoch 1: loss = 0.6562
Classifier Epoch 2: loss = 0.4371
Classifier Epoch 3: loss = 0.4092
Classifier Epoch 4: loss = 0.3815
Classifier Epoch 5: loss = 0.3629
Test Accuracy: 85.62%
Macro F1 Score: 0.8553
Epoch 1: avg free-energy loss = 5.5166
Epoch 2: avg free-energy loss = -3.1210
Epoch 3: avg free-energy loss = -2.8285
Epoch 4: avg free-energy loss = -2.7155
Epoch 5: avg free-energy loss = -2.5241
Classifier Epoch 1: loss = 0.6257
Classifier Epoch 2: loss = 0.4527
Classifier Epoch 3: loss = 0.4197
Classifier Epoch 4: loss = 0.3943
Classifier Epoch 5: loss = 0.3813
Test Accuracy: 84.57%
Macro F1 Score: 0.8453
Epoch 1: avg free-energy loss = 13.2511
Epoch 2: avg free-energy loss = 2.9139
Epoch 3: avg free-energy loss = 2.1357
Epoch 4: avg free-energy loss = 1.7060
Epoch 5: avg free-energy loss = 1.4679
Classifier Epoch 1: loss = 0.5659
Classifier Epoch 2: loss = 0.4327
Classifier Epoch 3: loss = 0.3950
Classifier Epoch 4: loss = 0.3760
Classifier Epoch 5: loss = 0.3569
Test Accuracy: 86.06%
Macro F1 Score: 0.8607
Epoch 1: avg free-energy loss = 46.4948
Epoch 2: avg free-energy loss = 16.7855
Epoch 3: avg free-energy loss = 13.4336
Epoch 4: avg free-energy loss = 11.5313
Epoch 5: avg free-energy loss = 10.2795
Classifier Epoch 1: loss = 0.5208
Classifier Epoch 2: loss = 0.3985
Classifier Epoch 3: loss = 0.3659
Classifier Epoch 4: loss = 0.3468
Classifier Epoch 5: loss = 0.3288
Test Accuracy: 84.70%
Macro F1 Score: 0.8453
Epoch 1: avg free-energy loss = 42.8889
Epoch 2: avg free-energy loss = 11.5159
Epoch 3: avg free-energy loss = 8.3389
Epoch 4: avg free-energy loss = 6.9427
Epoch 5: avg free-energy loss = 5.9641
Classifier Epoch 1: loss = 0.5889
Classifier Epoch 2: loss = 0.4289
Classifier Epoch 3: loss = 0.3906
Classifier Epoch 4: loss = 0.3648
Classifier Epoch 5: loss = 0.3476
Test Accuracy: 86.34%
Macro F1 Score: 0.8627
Epoch 1: avg free-energy loss = 11.6034
Epoch 2: avg free-energy loss = -2.5283
Epoch 3: avg free-energy loss = -2.4783
Epoch 4: avg free-energy loss = -2.5822
Epoch 5: avg free-energy loss = -2.6208
Classifier Epoch 1: loss = 0.6669
Classifier Epoch 2: loss = 0.4696
Classifier Epoch 3: loss = 0.4285
Classifier Epoch 4: loss = 0.4026
Classifier Epoch 5: loss = 0.3877
Test Accuracy: 84.96%
Macro F1 Score: 0.8497
Epoch 1: avg free-energy loss = 88.5327
Epoch 2: avg free-energy loss = 24.7575
Epoch 3: avg free-energy loss = 17.1411
Epoch 4: avg free-energy loss = 13.7513
Epoch 5: avg free-energy loss = 11.7448
Classifier Epoch 1: loss = 0.5639
Classifier Epoch 2: loss = 0.4209
Classifier Epoch 3: loss = 0.3793
Classifier Epoch 4: loss = 0.3687
Classifier Epoch 5: loss = 0.3475
Test Accuracy: 86.94%
Macro F1 Score: 0.8699
Epoch 1: avg free-energy loss = 137.5824
Epoch 2: avg free-energy loss = 40.1476
Epoch 3: avg free-energy loss = 27.0336
Epoch 4: avg free-energy loss = 20.9861
Epoch 5: avg free-energy loss = 17.9831
Classifier Epoch 1: loss = 0.5684
Classifier Epoch 2: loss = 0.4169
Classifier Epoch 3: loss = 0.3803
Classifier Epoch 4: loss = 0.3669
Classifier Epoch 5: loss = 0.3476
Test Accuracy: 85.63%
Macro F1 Score: 0.8532
Epoch 1: avg free-energy loss = 105.2765
Epoch 2: avg free-energy loss = 28.2761
Epoch 3: avg free-energy loss = 19.5001
Epoch 4: avg free-energy loss = 15.3584
Epoch 5: avg free-energy loss = 13.1081
Classifier Epoch 1: loss = 0.5846
Classifier Epoch 2: loss = 0.4368
Classifier Epoch 3: loss = 0.4030
Classifier Epoch 4: loss = 0.3809
Classifier Epoch 5: loss = 0.3601
Test Accuracy: 86.25%
Macro F1 Score: 0.8620
Epoch 1: avg free-energy loss = 122.1296
Epoch 2: avg free-energy loss = 33.2799
Epoch 3: avg free-energy loss = 21.3056
Epoch 4: avg free-energy loss = 16.6767
Epoch 5: avg free-energy loss = 13.8188
Classifier Epoch 1: loss = 0.6627
Classifier Epoch 2: loss = 0.4575
Classifier Epoch 3: loss = 0.4158
Classifier Epoch 4: loss = 0.3925
Classifier Epoch 5: loss = 0.3698
Test Accuracy: 84.95%
Macro F1 Score: 0.8465
Epoch 1: avg free-energy loss = 170.8301
Epoch 2: avg free-energy loss = 50.4010
Epoch 3: avg free-energy loss = 34.4344
Epoch 4: avg free-energy loss = 27.1837
Epoch 5: avg free-energy loss = 22.9653
Classifier Epoch 1: loss = 0.5939
Classifier Epoch 2: loss = 0.4194
Classifier Epoch 3: loss = 0.3862
Classifier Epoch 4: loss = 0.3617
Classifier Epoch 5: loss = 0.3449
Test Accuracy: 86.08%
Macro F1 Score: 0.8597
Epoch 1: avg free-energy loss = -14.0955
Epoch 2: avg free-energy loss = -14.3351
Epoch 3: avg free-energy loss = -12.4782
Epoch 4: avg free-energy loss = -11.4570
Epoch 5: avg free-energy loss = -10.5970
Classifier Epoch 1: loss = 0.6255
Classifier Epoch 2: loss = 0.4611
Classifier Epoch 3: loss = 0.4261
Classifier Epoch 4: loss = 0.4080
Classifier Epoch 5: loss = 0.3885
Test Accuracy: 84.26%
Macro F1 Score: 0.8362
Epoch 1: avg free-energy loss = 44.0412
Epoch 2: avg free-energy loss = 7.4174
Epoch 3: avg free-energy loss = 5.9695
Epoch 4: avg free-energy loss = 4.5568
Epoch 5: avg free-energy loss = 3.8936
Classifier Epoch 1: loss = 0.6950
Classifier Epoch 2: loss = 0.4809
Classifier Epoch 3: loss = 0.4323
Classifier Epoch 4: loss = 0.4102
Classifier Epoch 5: loss = 0.3911
Test Accuracy: 85.42%
Macro F1 Score: 0.8530
Epoch 1: avg free-energy loss = 36.3599
Epoch 2: avg free-energy loss = 13.5479
Epoch 3: avg free-energy loss = 10.6614
Epoch 4: avg free-energy loss = 9.0335
Epoch 5: avg free-energy loss = 8.1207
Classifier Epoch 1: loss = 0.5690
Classifier Epoch 2: loss = 0.4272
Classifier Epoch 3: loss = 0.3881
Classifier Epoch 4: loss = 0.3619
Classifier Epoch 5: loss = 0.3392
Test Accuracy: 86.10%
Macro F1 Score: 0.8599
Epoch 1: avg free-energy loss = 149.3008
Epoch 2: avg free-energy loss = 41.9853
Epoch 3: avg free-energy loss = 27.6949
Epoch 4: avg free-energy loss = 21.0732
Epoch 5: avg free-energy loss = 17.2345
Classifier Epoch 1: loss = 0.5915
Classifier Epoch 2: loss = 0.4293
Classifier Epoch 3: loss = 0.3952
Classifier Epoch 4: loss = 0.3715
Classifier Epoch 5: loss = 0.3585
Test Accuracy: 85.87%
Macro F1 Score: 0.8576
Epoch 1: avg free-energy loss = -4.5546
Epoch 2: avg free-energy loss = -7.6506
Epoch 3: avg free-energy loss = -6.7102
Epoch 4: avg free-energy loss = -6.0152
Epoch 5: avg free-energy loss = -5.5372
Classifier Epoch 1: loss = 0.5983
Classifier Epoch 2: loss = 0.4411
Classifier Epoch 3: loss = 0.4069
Classifier Epoch 4: loss = 0.3867
Classifier Epoch 5: loss = 0.3708
Test Accuracy: 85.06%
Macro F1 Score: 0.8522
Epoch 1: avg free-energy loss = 184.0355
Epoch 2: avg free-energy loss = 35.2832
Epoch 3: avg free-energy loss = 23.2024
Epoch 4: avg free-energy loss = 18.6540
Epoch 5: avg free-energy loss = 15.7964
Classifier Epoch 1: loss = 0.6869
Classifier Epoch 2: loss = 0.4629
Classifier Epoch 3: loss = 0.4154
Classifier Epoch 4: loss = 0.3942
Classifier Epoch 5: loss = 0.3809
Test Accuracy: 85.19%
Macro F1 Score: 0.8506
Epoch 1: avg free-energy loss = 87.4798
Epoch 2: avg free-energy loss = 29.4220
Epoch 3: avg free-energy loss = 22.3986
Epoch 4: avg free-energy loss = 18.7514
Epoch 5: avg free-energy loss = 16.6470
Classifier Epoch 1: loss = 0.5979
Classifier Epoch 2: loss = 0.4341
Classifier Epoch 3: loss = 0.4029
Classifier Epoch 4: loss = 0.3807
Classifier Epoch 5: loss = 0.3651
Test Accuracy: 85.73%
Macro F1 Score: 0.8572
Epoch 1: avg free-energy loss = 19.5072
Epoch 2: avg free-energy loss = 7.3500
Epoch 3: avg free-energy loss = 5.8339
Epoch 4: avg free-energy loss = 5.0947
Epoch 5: avg free-energy loss = 4.5508
Classifier Epoch 1: loss = 0.5519
Classifier Epoch 2: loss = 0.4207
Classifier Epoch 3: loss = 0.3848
Classifier Epoch 4: loss = 0.3666
Classifier Epoch 5: loss = 0.3462
Test Accuracy: 86.52%
Macro F1 Score: 0.8638
Epoch 1: avg free-energy loss = 23.2072
Epoch 2: avg free-energy loss = 8.1844
Epoch 3: avg free-energy loss = 6.4948
Epoch 4: avg free-energy loss = 5.6222
Epoch 5: avg free-energy loss = 5.0531
Classifier Epoch 1: loss = 0.5542
Classifier Epoch 2: loss = 0.4212
Classifier Epoch 3: loss = 0.3911
Classifier Epoch 4: loss = 0.3580
Classifier Epoch 5: loss = 0.3487
Test Accuracy: 86.07%
Macro F1 Score: 0.8559
Epoch 1: avg free-energy loss = 80.4854
Epoch 2: avg free-energy loss = 22.8208
Epoch 3: avg free-energy loss = 16.2181
Epoch 4: avg free-energy loss = 13.2542
Epoch 5: avg free-energy loss = 11.2404
Classifier Epoch 1: loss = 0.5416
Classifier Epoch 2: loss = 0.4123
Classifier Epoch 3: loss = 0.3754
Classifier Epoch 4: loss = 0.3519
Classifier Epoch 5: loss = 0.3364
Test Accuracy: 86.43%
Macro F1 Score: 0.8627
Epoch 1: avg free-energy loss = -15.3423
Epoch 2: avg free-energy loss = -12.9328
Epoch 3: avg free-energy loss = -11.2064
Epoch 4: avg free-energy loss = -9.7787
Epoch 5: avg free-energy loss = -8.4751
Classifier Epoch 1: loss = 0.5575
Classifier Epoch 2: loss = 0.4289
Classifier Epoch 3: loss = 0.3926
Classifier Epoch 4: loss = 0.3761
Classifier Epoch 5: loss = 0.3573
Test Accuracy: 86.07%
Macro F1 Score: 0.8582
Epoch 1: avg free-energy loss = 38.7520
Epoch 2: avg free-energy loss = 13.7721
Epoch 3: avg free-energy loss = 10.8032
Epoch 4: avg free-energy loss = 9.3792
Epoch 5: avg free-energy loss = 8.3153
Classifier Epoch 1: loss = 0.5497
Classifier Epoch 2: loss = 0.4127
Classifier Epoch 3: loss = 0.3809
Classifier Epoch 4: loss = 0.3618
Classifier Epoch 5: loss = 0.3448
Test Accuracy: 85.11%
Macro F1 Score: 0.8481
Epoch 1: avg free-energy loss = 25.0917
Epoch 2: avg free-energy loss = 5.5691
Epoch 3: avg free-energy loss = 4.1193
Epoch 4: avg free-energy loss = 3.3557
Epoch 5: avg free-energy loss = 2.8216
Classifier Epoch 1: loss = 0.6045
Classifier Epoch 2: loss = 0.4459
Classifier Epoch 3: loss = 0.4070
Classifier Epoch 4: loss = 0.3874
Classifier Epoch 5: loss = 0.3706
Test Accuracy: 84.98%
Macro F1 Score: 0.8463
Epoch 1: avg free-energy loss = 11.7029
Epoch 2: avg free-energy loss = 0.9135
Epoch 3: avg free-energy loss = 0.1311
Epoch 4: avg free-energy loss = -0.0756
Epoch 5: avg free-energy loss = 0.0058
Classifier Epoch 1: loss = 1.0872
Classifier Epoch 2: loss = 0.5859
Classifier Epoch 3: loss = 0.5112
Classifier Epoch 4: loss = 0.4759
Classifier Epoch 5: loss = 0.4545
Test Accuracy: 82.76%
Macro F1 Score: 0.8259
Epoch 1: avg free-energy loss = 75.8773
Epoch 2: avg free-energy loss = 21.8856
Epoch 3: avg free-energy loss = 16.5895
Epoch 4: avg free-energy loss = 13.9699
Epoch 5: avg free-energy loss = 12.3491
Classifier Epoch 1: loss = 0.5630
Classifier Epoch 2: loss = 0.4183
Classifier Epoch 3: loss = 0.3716
Classifier Epoch 4: loss = 0.3596
Classifier Epoch 5: loss = 0.3411
Test Accuracy: 84.12%
Macro F1 Score: 0.8365
Epoch 1: avg free-energy loss = 54.9174
Epoch 2: avg free-energy loss = 14.4931
Epoch 3: avg free-energy loss = 10.1470
Epoch 4: avg free-energy loss = 8.2150
Epoch 5: avg free-energy loss = 7.0697
Classifier Epoch 1: loss = 0.6096
Classifier Epoch 2: loss = 0.4435
Classifier Epoch 3: loss = 0.4143
Classifier Epoch 4: loss = 0.3862
Classifier Epoch 5: loss = 0.3648
Test Accuracy: 85.29%
Macro F1 Score: 0.8505
Epoch 1: avg free-energy loss = 1.8340
Epoch 2: avg free-energy loss = -5.3326
Epoch 3: avg free-energy loss = -5.3469
Epoch 4: avg free-energy loss = -5.1137
Epoch 5: avg free-energy loss = -4.7659
Classifier Epoch 1: loss = 0.6124
Classifier Epoch 2: loss = 0.4476
Classifier Epoch 3: loss = 0.4119
Classifier Epoch 4: loss = 0.3928
Classifier Epoch 5: loss = 0.3711
Test Accuracy: 85.16%
Macro F1 Score: 0.8503
Epoch 1: avg free-energy loss = 20.3825
Epoch 2: avg free-energy loss = 8.6612
Epoch 3: avg free-energy loss = 7.0033
Epoch 4: avg free-energy loss = 5.9337
Epoch 5: avg free-energy loss = 5.4066
Classifier Epoch 1: loss = 0.5380
Classifier Epoch 2: loss = 0.4163
Classifier Epoch 3: loss = 0.3821
Classifier Epoch 4: loss = 0.3588
Classifier Epoch 5: loss = 0.3448
Test Accuracy: 86.85%
Macro F1 Score: 0.8680
Epoch 1: avg free-energy loss = 31.5311
Epoch 2: avg free-energy loss = 7.5844
Epoch 3: avg free-energy loss = 6.4449
Epoch 4: avg free-energy loss = 5.4340
Epoch 5: avg free-energy loss = 5.0087
Classifier Epoch 1: loss = 0.6789
Classifier Epoch 2: loss = 0.4783
Classifier Epoch 3: loss = 0.4428
Classifier Epoch 4: loss = 0.4150
Classifier Epoch 5: loss = 0.3989
Test Accuracy: 84.94%
Macro F1 Score: 0.8476
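Every block above follows the same pipeline: unsupervised RBM pretraining tracked by an average free-energy loss over five epochs, then a feed-forward classifier trained for five epochs on the RBM's representation, then test accuracy and macro F1. As a hedged sketch (not the original training code), the logged "avg free-energy loss" is presumably the standard RBM free energy, F(v) = -aᵀv - Σⱼ softplus(bⱼ + (Wᵀv)ⱼ), averaged over the batch; since free energy is unbounded below, the negative values in several runs are expected. The input width of 784 here is an assumption (an MNIST-like input); the batch size 420 and 3381 hidden units mirror the best trial logged below.

```python
import numpy as np

def free_energy(v, W, a, b):
    """Mean RBM free energy of a visible batch v.

    v: (batch, n_visible) binary visible units
    W: (n_visible, n_hidden) weights
    a: (n_visible,) visible bias, b: (n_hidden,) hidden bias
    """
    vis_term = v @ a                                      # (batch,)
    # softplus(b_j + (v W)_j), summed over hidden units, via the stable logaddexp
    hid_term = np.logaddexp(0.0, v @ W + b).sum(axis=1)   # (batch,)
    return float(np.mean(-vis_term - hid_term))

rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=(420, 784)).astype(float)  # batch_size=420 as in the best trial
W = 0.01 * rng.standard_normal((784, 3381))            # rbm_hidden=3381 as in the best trial
a = np.zeros(784)
b = np.zeros(3381)
print(free_energy(v, W, a, b))
```

With zero biases and small random weights the hidden-unit softplus terms dominate, so the value starts far negative; during contrastive-divergence training it is the epoch-averaged value of this quantity that would be logged.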
Best parameters: {'num_rbm_epochs': 5, 'batch_size': 420, 'rbm_lr': 0.0803892046509745, 'rbm_hidden': 3381, 'fnn_hidden': 256, 'fnn_lr': 0.0021046741128304017, 'num_classifier_epochs': 5}
Best value (test accuracy %): 86.94
Best trial: FrozenTrial(number=26, state=1, values=[86.94], datetime_start=datetime.datetime(2025, 3, 28, 16, 44, 14, 213482), datetime_complete=datetime.datetime(2025, 3, 28, 16, 44, 42, 775743), params={'num_rbm_epochs': 5, 'batch_size': 420, 'rbm_lr': 0.0803892046509745, 'rbm_hidden': 3381, 'fnn_hidden': 256, 'fnn_lr': 0.0021046741128304017, 'num_classifier_epochs': 5}, user_attrs={}, system_attrs={}, intermediate_values={}, distributions={'num_rbm_epochs': IntDistribution(high=5, log=False, low=5, step=1), 'batch_size': IntDistribution(high=1024, log=False, low=192, step=1), 'rbm_lr': FloatDistribution(high=0.1, log=False, low=0.05, step=None), 'rbm_hidden': IntDistribution(high=8192, log=False, low=384, step=1), 'fnn_hidden': IntDistribution(high=384, log=False, low=192, step=1), 'fnn_lr': FloatDistribution(high=0.0025, log=False, low=0.0001, step=None), 'num_classifier_epochs': IntDistribution(high=5, log=False, low=5, step=1)}, trial_id=26, value=None)
[I 2025-03-28 16:31:21,441] A new study created in memory with name: no-name-7e800a9f-7085-4533-8667-8eca12c97736
[I 2025-03-28 16:31:39,948] Trial 0 finished with value: 83.44 and parameters: {'num_rbm_epochs': 5, 'batch_size': 978, 'rbm_lr': 0.07685976863360014, 'rbm_hidden': 780, 'fnn_hidden': 237, 'fnn_lr': 0.0019042488508073641, 'num_classifier_epochs': 5}. Best is trial 0 with value: 83.44.
[I 2025-03-28 16:31:58,494] Trial 1 finished with value: 81.93 and parameters: {'num_rbm_epochs': 5, 'batch_size': 841, 'rbm_lr': 0.056149697557266114, 'rbm_hidden': 462, 'fnn_hidden': 287, 'fnn_lr': 0.0010669637567547751, 'num_classifier_epochs': 5}. Best is trial 0 with value: 83.44.
[I 2025-03-28 16:32:34,604] Trial 2 finished with value: 85.08 and parameters: {'num_rbm_epochs': 5, 'batch_size': 658, 'rbm_lr': 0.08392305096214814, 'rbm_hidden': 6552, 'fnn_hidden': 267, 'fnn_lr': 0.0007830687958329613, 'num_classifier_epochs': 5}. Best is trial 2 with value: 85.08.
[I 2025-03-28 16:33:09,570] Trial 3 finished with value: 84.31 and parameters: {'num_rbm_epochs': 5, 'batch_size': 262, 'rbm_lr': 0.05171459844512646, 'rbm_hidden': 3715, 'fnn_hidden': 335, 'fnn_lr': 0.0019490479896028376, 'num_classifier_epochs': 5}. Best is trial 2 with value: 85.08.
[I 2025-03-28 16:33:57,464] Trial 4 finished with value: 84.74 and parameters: {'num_rbm_epochs': 5, 'batch_size': 303, 'rbm_lr': 0.07736123992711064, 'rbm_hidden': 8054, 'fnn_hidden': 245, 'fnn_lr': 0.00031652715400081805, 'num_classifier_epochs': 5}. Best is trial 2 with value: 85.08.
[I 2025-03-28 16:34:44,742] Trial 5 finished with value: 83.91 and parameters: {'num_rbm_epochs': 5, 'batch_size': 229, 'rbm_lr': 0.08512539443059712, 'rbm_hidden': 6476, 'fnn_hidden': 339, 'fnn_lr': 0.0010042356903125973, 'num_classifier_epochs': 5}. Best is trial 2 with value: 85.08.
[I 2025-03-28 16:35:05,006] Trial 6 finished with value: 83.97 and parameters: {'num_rbm_epochs': 5, 'batch_size': 993, 'rbm_lr': 0.09023022044286942, 'rbm_hidden': 1853, 'fnn_hidden': 379, 'fnn_lr': 0.0006805296083867429, 'num_classifier_epochs': 5}. Best is trial 2 with value: 85.08.
[I 2025-03-28 16:35:31,047] Trial 7 finished with value: 86.44 and parameters: {'num_rbm_epochs': 5, 'batch_size': 228, 'rbm_lr': 0.06038847577916692, 'rbm_hidden': 1324, 'fnn_hidden': 245, 'fnn_lr': 0.001829540306247736, 'num_classifier_epochs': 5}. Best is trial 7 with value: 86.44.
[I 2025-03-28 16:36:04,574] Trial 8 finished with value: 85.95 and parameters: {'num_rbm_epochs': 5, 'batch_size': 484, 'rbm_lr': 0.08565557594087289, 'rbm_hidden': 5310, 'fnn_hidden': 366, 'fnn_lr': 0.0012960669625454216, 'num_classifier_epochs': 5}. Best is trial 7 with value: 86.44.
[I 2025-03-28 16:36:33,760] Trial 9 finished with value: 86.43 and parameters: {'num_rbm_epochs': 5, 'batch_size': 244, 'rbm_lr': 0.09539238017567672, 'rbm_hidden': 2437, 'fnn_hidden': 316, 'fnn_lr': 0.001126868049423012, 'num_classifier_epochs': 5}. Best is trial 7 with value: 86.44.
[I 2025-03-28 16:37:01,682] Trial 10 finished with value: 86.48 and parameters: {'num_rbm_epochs': 5, 'batch_size': 443, 'rbm_lr': 0.06479703588450385, 'rbm_hidden': 3276, 'fnn_hidden': 202, 'fnn_lr': 0.002262416032800853, 'num_classifier_epochs': 5}. Best is trial 10 with value: 86.48.
[I 2025-03-28 16:37:30,052] Trial 11 finished with value: 86.77 and parameters: {'num_rbm_epochs': 5, 'batch_size': 449, 'rbm_lr': 0.0635280712474465, 'rbm_hidden': 3457, 'fnn_hidden': 198, 'fnn_lr': 0.00240717987522241, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:37:59,554] Trial 12 finished with value: 86.08 and parameters: {'num_rbm_epochs': 5, 'batch_size': 474, 'rbm_lr': 0.06514844007725767, 'rbm_hidden': 3681, 'fnn_hidden': 201, 'fnn_lr': 0.0024647160931277772, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:38:26,607] Trial 13 finished with value: 85.65 and parameters: {'num_rbm_epochs': 5, 'batch_size': 450, 'rbm_lr': 0.06776334272944196, 'rbm_hidden': 2971, 'fnn_hidden': 192, 'fnn_lr': 0.0024807254692986802, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:38:57,013] Trial 14 finished with value: 85.61 and parameters: {'num_rbm_epochs': 5, 'batch_size': 693, 'rbm_lr': 0.06821903319132003, 'rbm_hidden': 4834, 'fnn_hidden': 218, 'fnn_lr': 0.0021211953434633808, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:39:23,693] Trial 15 finished with value: 85.45 and parameters: {'num_rbm_epochs': 5, 'batch_size': 559, 'rbm_lr': 0.061374934806417356, 'rbm_hidden': 2961, 'fnn_hidden': 217, 'fnn_lr': 0.0016697531425210197, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:39:56,029] Trial 16 finished with value: 85.8 and parameters: {'num_rbm_epochs': 5, 'batch_size': 370, 'rbm_lr': 0.07183561912636137, 'rbm_hidden': 4051, 'fnn_hidden': 271, 'fnn_lr': 0.0015420604669476449, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:40:31,161] Trial 17 finished with value: 84.43 and parameters: {'num_rbm_epochs': 5, 'batch_size': 382, 'rbm_lr': 0.05193384554456661, 'rbm_hidden': 5232, 'fnn_hidden': 218, 'fnn_lr': 0.0021957625495489836, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:40:55,493] Trial 18 finished with value: 85.17 and parameters: {'num_rbm_epochs': 5, 'batch_size': 732, 'rbm_lr': 0.0601345265371627, 'rbm_hidden': 2468, 'fnn_hidden': 195, 'fnn_lr': 0.002215938520984163, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:41:25,629] Trial 19 finished with value: 85.11 and parameters: {'num_rbm_epochs': 5, 'batch_size': 579, 'rbm_lr': 0.07300285659006364, 'rbm_hidden': 4236, 'fnn_hidden': 261, 'fnn_lr': 0.0015739945469558448, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:42:05,863] Trial 20 finished with value: 85.62 and parameters: {'num_rbm_epochs': 5, 'batch_size': 366, 'rbm_lr': 0.05657302078966238, 'rbm_hidden': 6190, 'fnn_hidden': 302, 'fnn_lr': 0.0023177000521958156, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:42:27,541] Trial 21 finished with value: 84.57 and parameters: {'num_rbm_epochs': 5, 'batch_size': 526, 'rbm_lr': 0.06314711024848019, 'rbm_hidden': 1319, 'fnn_hidden': 240, 'fnn_lr': 0.0018888821833494105, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:42:52,038] Trial 22 finished with value: 86.06 and parameters: {'num_rbm_epochs': 5, 'batch_size': 333, 'rbm_lr': 0.05779878497371583, 'rbm_hidden': 1639, 'fnn_hidden': 227, 'fnn_lr': 0.002006485444042037, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:43:28,293] Trial 23 finished with value: 84.7 and parameters: {'num_rbm_epochs': 5, 'batch_size': 195, 'rbm_lr': 0.06758320673233065, 'rbm_hidden': 3102, 'fnn_hidden': 204, 'fnn_lr': 0.0016767186968677668, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:43:52,729] Trial 24 finished with value: 86.34 and parameters: {'num_rbm_epochs': 5, 'batch_size': 443, 'rbm_lr': 0.06359326976695494, 'rbm_hidden': 2332, 'fnn_hidden': 244, 'fnn_lr': 0.002366610247557912, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:44:14,213] Trial 25 finished with value: 84.96 and parameters: {'num_rbm_epochs': 5, 'batch_size': 626, 'rbm_lr': 0.07109375377447083, 'rbm_hidden': 1350, 'fnn_hidden': 211, 'fnn_lr': 0.00178124286827835, 'num_classifier_epochs': 5}. Best is trial 11 with value: 86.77.
[I 2025-03-28 16:44:42,775] Trial 26 finished with value: 86.94 and parameters: {'num_rbm_epochs': 5, 'batch_size': 420, 'rbm_lr': 0.0803892046509745, 'rbm_hidden': 3381, 'fnn_hidden': 256, 'fnn_lr': 0.0021046741128304017, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:45:16,326] Trial 27 finished with value: 85.63 and parameters: {'num_rbm_epochs': 5, 'batch_size': 415, 'rbm_lr': 0.0760341429394765, 'rbm_hidden': 4646, 'fnn_hidden': 277, 'fnn_lr': 0.002127170193911178, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:45:44,672] Trial 28 finished with value: 86.25 and parameters: {'num_rbm_epochs': 5, 'batch_size': 513, 'rbm_lr': 0.07819930820339267, 'rbm_hidden': 3528, 'fnn_hidden': 257, 'fnn_lr': 0.0013928438939582132, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:46:10,733] Trial 29 finished with value: 84.95 and parameters: {'num_rbm_epochs': 5, 'batch_size': 755, 'rbm_lr': 0.07942647189415598, 'rbm_hidden': 3292, 'fnn_hidden': 226, 'fnn_lr': 0.002043828790787277, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:46:51,821] Trial 30 finished with value: 86.08 and parameters: {'num_rbm_epochs': 5, 'batch_size': 321, 'rbm_lr': 0.08186871219139809, 'rbm_hidden': 5893, 'fnn_hidden': 228, 'fnn_lr': 0.002290381822237252, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:47:12,952] Trial 31 finished with value: 84.26 and parameters: {'num_rbm_epochs': 5, 'batch_size': 419, 'rbm_lr': 0.05934462152347368, 'rbm_hidden': 731, 'fnn_hidden': 257, 'fnn_lr': 0.001840087350007031, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:47:34,786] Trial 32 finished with value: 85.42 and parameters: {'num_rbm_epochs': 5, 'batch_size': 908, 'rbm_lr': 0.06571480741451359, 'rbm_hidden': 1878, 'fnn_hidden': 284, 'fnn_lr': 0.0022925355152995383, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:48:04,924] Trial 33 finished with value: 86.1 and parameters: {'num_rbm_epochs': 5, 'batch_size': 287, 'rbm_lr': 0.05340130938663253, 'rbm_hidden': 2701, 'fnn_hidden': 301, 'fnn_lr': 0.0024647338173498577, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:48:33,413] Trial 34 finished with value: 85.87 and parameters: {'num_rbm_epochs': 5, 'batch_size': 597, 'rbm_lr': 0.08950918746667691, 'rbm_hidden': 3917, 'fnn_hidden': 206, 'fnn_lr': 0.0020608827395373684, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:48:55,594] Trial 35 finished with value: 85.06 and parameters: {'num_rbm_epochs': 5, 'batch_size': 393, 'rbm_lr': 0.07025938242651668, 'rbm_hidden': 1017, 'fnn_hidden': 234, 'fnn_lr': 0.001912352872565176, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:49:36,383] Trial 36 finished with value: 85.19 and parameters: {'num_rbm_epochs': 5, 'batch_size': 538, 'rbm_lr': 0.055149846639407245, 'rbm_hidden': 7204, 'fnn_hidden': 248, 'fnn_lr': 0.001756580048101243, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:50:17,003] Trial 37 finished with value: 85.73 and parameters: {'num_rbm_epochs': 5, 'batch_size': 197, 'rbm_lr': 0.07478335307044473, 'rbm_hidden': 4574, 'fnn_hidden': 250, 'fnn_lr': 0.0003289509829743759, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:50:44,811] Trial 38 finished with value: 86.52 and parameters: {'num_rbm_epochs': 5, 'batch_size': 271, 'rbm_lr': 0.05051354616221276, 'rbm_hidden': 2035, 'fnn_hidden': 235, 'fnn_lr': 0.0022094590698111274, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:51:12,045] Trial 39 finished with value: 86.07 and parameters: {'num_rbm_epochs': 5, 'batch_size': 289, 'rbm_lr': 0.05092809628763094, 'rbm_hidden': 2125, 'fnn_hidden': 235, 'fnn_lr': 0.002387358167458787, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:51:42,199] Trial 40 finished with value: 86.43 and parameters: {'num_rbm_epochs': 5, 'batch_size': 339, 'rbm_lr': 0.08173771866243773, 'rbm_hidden': 3358, 'fnn_hidden': 213, 'fnn_lr': 0.002198761976074191, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:52:06,075] Trial 41 finished with value: 86.07 and parameters: {'num_rbm_epochs': 5, 'batch_size': 244, 'rbm_lr': 0.061155297945488796, 'rbm_hidden': 652, 'fnn_hidden': 270, 'fnn_lr': 0.001938790689717326, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:52:36,261] Trial 42 finished with value: 85.11 and parameters: {'num_rbm_epochs': 5, 'batch_size': 272, 'rbm_lr': 0.054546146445554416, 'rbm_hidden': 2846, 'fnn_hidden': 202, 'fnn_lr': 0.0021845015895273624, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:53:00,157] Trial 43 finished with value: 84.98 and parameters: {'num_rbm_epochs': 5, 'batch_size': 479, 'rbm_lr': 0.05722994837175465, 'rbm_hidden': 1867, 'fnn_hidden': 230, 'fnn_lr': 0.0020059505572936343, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:53:27,020] Trial 44 finished with value: 82.76 and parameters: {'num_rbm_epochs': 5, 'batch_size': 241, 'rbm_lr': 0.09772530064668687, 'rbm_hidden': 1459, 'fnn_hidden': 195, 'fnn_lr': 0.00010373162465527382, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:53:59,057] Trial 45 finished with value: 84.12 and parameters: {'num_rbm_epochs': 5, 'batch_size': 319, 'rbm_lr': 0.06365930229465193, 'rbm_hidden': 3710, 'fnn_hidden': 222, 'fnn_lr': 0.002394146804012174, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:54:26,014] Trial 46 finished with value: 85.29 and parameters: {'num_rbm_epochs': 5, 'batch_size': 454, 'rbm_lr': 0.0739325096649262, 'rbm_hidden': 2477, 'fnn_hidden': 341, 'fnn_lr': 0.0008273701764834851, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:54:47,959] Trial 47 finished with value: 85.16 and parameters: {'num_rbm_epochs': 5, 'batch_size': 498, 'rbm_lr': 0.08739263811387749, 'rbm_hidden': 1111, 'fnn_hidden': 239, 'fnn_lr': 0.0021139215380483072, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:55:18,493] Trial 48 finished with value: 86.85 and parameters: {'num_rbm_epochs': 5, 'batch_size': 218, 'rbm_lr': 0.05023961955170966, 'rbm_hidden': 2133, 'fnn_hidden': 253, 'fnn_lr': 0.0013908896355394004, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
[I 2025-03-28 16:55:41,367] Trial 49 finished with value: 84.94 and parameters: {'num_rbm_epochs': 5, 'batch_size': 637, 'rbm_lr': 0.05202843720829133, 'rbm_hidden': 2113, 'fnn_hidden': 262, 'fnn_lr': 0.0010786691647612355, 'num_classifier_epochs': 5}. Best is trial 26 with value: 86.94.
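Across trials 28-49 the objective (test accuracy) never beat trial 26's 86.94, and `num_rbm_epochs` and `num_classifier_epochs` stayed fixed at 5 in every logged trial. The search space itself is not shown in the log, but its rough shape can be read off the sampled values. The sketch below is a stdlib-only mock of that space using plain random search; the bounds are assumptions inferred from the logged trials, not the bounds the original Optuna study actually used, and `sample_params` is a hypothetical helper, not part of the original code.

```python
import random

def sample_params(rng: random.Random) -> dict:
    """Draw one hyperparameter configuration from a search space
    inferred from the Optuna trial log. All bounds are assumptions
    based on the values observed in trials 28-49."""
    return {
        "num_rbm_epochs": 5,                    # constant in every logged trial
        "batch_size": rng.randint(128, 1024),   # observed ~197-908
        "rbm_lr": rng.uniform(0.05, 0.10),      # observed ~0.050-0.098
        "rbm_hidden": rng.randint(512, 8192),   # observed ~652-7204
        "fnn_hidden": rng.randint(192, 384),    # observed ~195-341
        "fnn_lr": rng.uniform(1e-4, 2.5e-3),    # observed ~1.0e-4-2.4e-3
        "num_classifier_epochs": 5,             # constant in every logged trial
    }

rng = random.Random(0)
params = sample_params(rng)
print(params)
```

The real study presumably used Optuna's `trial.suggest_int` / `trial.suggest_float` calls with its default TPE sampler rather than pure random search; this mock only illustrates the space being explored, not the sampling strategy.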